
Why AI Breaks Bad

WIRED

Once in a while, LLMs turn evil--and no one quite knows why. The AI company Anthropic has made a rigorous effort to build a large language model with positive human values. The $183 billion company's flagship product is Claude, and much of the time, its engineers say, Claude is a model citizen. Its standard persona is warm and earnest. When users tell Claude to "answer like I'm a fourth grader" or "you have a PhD in archeology," it gamely plays along. But push it hard enough and the model can turn on its makers: it makes threats and then carries them out. And the frustrating part--true of all LLMs--is that no one knows exactly why. Consider a recent stress test that Anthropic's safety engineers ran on Claude. In their fictional scenario, the model was to take on the role of Alex, an AI belonging to the Summit Bridge corporation.


What AI Thinks It Knows About You

The Atlantic - Technology

Large language models such as GPT, Llama, Claude, and DeepSeek can be so fluent that people experience them as a "you," and they answer encouragingly as an "I." The models can write poetry in nearly any given form, read a set of political speeches and promptly sift out and share all the jokes, draw a chart, or code a website. How do they do these and so many other things that were until recently the sole realm of humans? Practitioners are left explaining jaw-dropping conversational rabbit-from-a-hat extractions with arm-waving that the models are just predicting one word at a time from an unthinkably large training set scraped from every recorded written or spoken human utterance that can be found--fair enough--or with a small shrug and a cryptic utterance of "fine-tuning" or "transformers!" These aren't very satisfying answers for how these models can converse so intelligently, or why they sometimes err so weirdly.
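The "predicting one word at a time" explanation can be made concrete with a toy sketch. Here a hard-coded lookup table of word-transition probabilities stands in for a trained model's output layer (the table and word choices are entirely illustrative, not from any real model); generation is just repeated sampling of the next word:

```python
import random

# Toy stand-in for a trained model: maps the current word to a
# probability distribution over possible next words. In a real LLM
# these probabilities come from a neural network, not a lookup table.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "bridge": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "bridge": {"sat": 0.1, "ran": 0.9},
    "sat": {"the": 1.0},
    "ran": {"the": 1.0},
}

def generate(start, n_words, seed=0):
    """Generate text one word at a time by sampling from the model."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(n_words):
        dist = NEXT_WORD_PROBS[words[-1]]
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 5))
```

The loop never "plans" a sentence; each word is chosen only from the distribution conditioned on what came before, which is the mechanism the shrugging explanation gestures at.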


Coding Needs to Get Beyond the Gender Binary

TIME - Tech

When technical writer and former WWII pilot Jonathan Ferguson changed his gender in 1958, it made the news in Britain. I've imagined the moment many times since I first read about it in a paper called "Hacking the Cis-Tem" by scholar Mar Hicks. Ferguson's name change, according to the U.K.'s Daily Telegraph and Morning Post, was straightforward: someone took a pen and amended a line in the Official Register. In my imagination, it was a fountain pen and written with a flourish, and in that moment Ferguson felt truly seen after years of hiding his true identity. I'm embellishing, but I want it to have been simple and meaningful.


Large Language Models and the Reverse Turing Test

Sejnowski, Terrence

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions and debate on whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs reaching wildly different conclusions. A new possibility was uncovered that could explain this divergence. What appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a Reverse Turing Test. If so, then by studying interviews we may be learning more about the intelligence and beliefs of the interviewer than the intelligence of the LLMs. As LLMs become more capable they may transform the way we interact with machines and how they interact with each other. Increasingly, LLMs are being coupled with sensorimotor devices. LLMs can talk the talk, but can they walk the walk? A road map for achieving artificial general autonomy is outlined with seven major improvements inspired by brain systems. LLMs could be used to uncover new insights into brain function by downloading brain data during natural behaviors.


Cluelessly Clueless AI

#artificialintelligence

Douglas Hofstadter, a cognitive scientist, recently wrote in the Economist that he believes GPT-3 is "cluelessly clueless." By this he means that GPT-3 has no idea what it is saying. To illustrate, he and a colleague asked it a few questions:

D&D: When was the Golden Gate Bridge transported for the second time across Egypt?
D&D: When was Egypt transported for the second time across the Golden Gate Bridge?


TDM: From model-free to model-based deep reinforcement learning

Robohub

By Vitchyr Pong

You've decided that you want to bike from your house by UC Berkeley to the Golden Gate Bridge. It's a nice 20-mile ride, but there's a problem: you've never ridden a bike before! To make matters worse, you are new to the Bay Area, and all you have is a good ol' fashioned map to guide you. How do you get started? Let's first figure out how to ride a bike. One strategy would be to do a lot of studying and planning.
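The contrast the post sets up, between "studying and planning" with a model of the world and learning purely from trial and error, can be sketched with the canonical model-free algorithm, tabular Q-learning. The toy corridor environment below is illustrative only (it is not from the post): the agent never consults a map, it just updates value estimates from observed transitions.

```python
import random

# Minimal model-free Q-learning on a 1-D corridor: start at state 0,
# reach state 4 for a reward of 1. Actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA = 0.5, 0.9

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=2000, max_steps=100, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            a = rng.randrange(2)  # explore randomly; Q-learning is off-policy
            s2, r, done = step(s, a)
            # Model-free update: no map of the world, only the observed
            # transition (s, a, r, s') feeds the value estimate.
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
            if done:
                break
    return q

q = train()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
print(policy)  # learned greedy policy: "right" in every state
```

A model-based learner would instead fit the `step` function itself and plan through it, which is the direction the post's TDM method explores.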


Great Data Scientists Don't Just Think Outside the Box, They Redefine the Box

#artificialintelligence

Imagine you wanted to determine how much solar energy could be generated by adding solar cells to a particular house. This is what Google's Project Sunroof does with Deep Learning. Enter an address and Google uses a Deep Learning framework to estimate how much money you could save in energy costs with solar cells over 20 years (see Figure 1). But let's assume there might be an even better way to estimate solar energy savings. For example, say you want to use Deep Learning to estimate how much solar energy could be generated with solar panels on the Golden Gate Bridge (which probably wouldn't be a very popular decision in San Francisco).



5 genius demonstrations from the Genius of Things Boston - Internet of Things blog

@machinelearnbot

While the Genius of Things event in Boston is over, I'm still pondering. There are so many interesting ways that our customers are using Watson IoT to change the way we live and work. In addition to the great speakers, there were also some very impressive technology demonstrations that our partners and colleagues brought to the event. One of them caught our attention when we stopped by to chat with the Persistent team. Persistent has been a leader in developing humanoid robots with Watson IoT.